cognitive science



MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks

Neural Information Processing Systems

Human commonsense understanding of the physical and social world is organized around intuitive theories. These theories support making causal and moral judgments. When something bad happens, we naturally ask: who did what, and why? A rich literature in cognitive science has studied people's causal and moral intuitions. This work has revealed a number of factors that systematically influence people's judgments, such as the violation of norms and whether the harm is avoidable or inevitable.


Learning Structure from the Ground up---Hierarchical Representation Learning by Chunking

Neural Information Processing Systems

From learning to play the piano to speaking a new language, reusing and recombining previously acquired representations enables us to master complex skills and easily adapt to new environments. Inspired by the Gestalt principle of grouping by proximity and theories of chunking in cognitive science, we propose a hierarchical chunking model (HCM).
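The chunking idea described above can be illustrated with a toy sketch: repeatedly merge the most frequent adjacent pair in a sequence into a single chunk, so that chunks of chunks form a hierarchy. This mirrors the spirit of grouping by proximity, not HCM's actual algorithm; the function name and merge rule are illustrative assumptions.

```python
from collections import Counter

def chunk(sequence, n_merges=2):
    """Toy chunking by proximity: merge the most frequent adjacent pair."""
    seq = list(sequence)
    for _ in range(n_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                merged.append((a, b))   # new chunk replaces the pair
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    return seq
```

Applied to a repetitive sequence like `"ababab"`, a single merge pass replaces every `('a', 'b')` pair with one chunk; further passes would merge chunks of chunks, yielding a hierarchy.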


Structured Cognitive Loop for Behavioral Intelligence in Large Language Model Agents

Kim, Myung Ho

arXiv.org Artificial Intelligence

Large language models have advanced natural language understanding and generation, but their use as autonomous agents introduces architectural challenges for multi-step tasks. Existing frameworks often mix cognition, memory, and control in a single prompt, reducing coherence and predictability. The Structured Cognitive Loop (SCL) is proposed as an alternative architecture that separates these functions. In SCL, the language model handles cognition, memory is stored externally, and execution is guided by a lightweight controller within a goal-directed loop. This design allows intermediate results to be recorded and verified before actions are taken, improving traceability and evaluation. SCL is evaluated against prompt-based baselines such as ReAct and LangChain agents across three tasks: travel planning, conditional email drafting, and constraint-guided image generation. Under matched settings, SCL achieves an average task success rate of 86.3 percent, compared with 70.5 to 76.8 percent for baselines. It also shows higher goal fidelity, fewer redundant calls, and reduced unsupported assertions. These results indicate that separating cognition, memory, and control can enhance reliability and interpretability without relying on larger models or heavier prompts. The findings should be regarded as preliminary evidence, with broader tests across model families and task domains planned for future work.
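The separation the abstract describes can be sketched as a minimal goal-directed loop: the language model proposes a step, a lightweight controller verifies it, and only verified intermediate results are recorded in external memory. This is a hedged illustration of the architectural idea, not SCL's actual implementation; the function names (`run_scl`, `verify`) and the list-based memory are assumptions.

```python
def verify(proposal):
    """Placeholder controller check; a real controller would
    validate the proposal against the goal's constraints."""
    return "result" in proposal

def run_scl(goal, cognition, max_steps=5):
    """Goal-directed loop: cognition proposes, controller verifies,
    external memory records the verified intermediate results."""
    memory = []  # memory lives outside the language model
    for _ in range(max_steps):
        proposal = cognition(goal, memory)   # model handles cognition only
        if proposal.get("done"):
            break
        if verify(proposal):                 # check before acting
            memory.append(proposal)          # record traceable result
    return memory
```

Because every intermediate result passes through `verify` before being stored, the trace in `memory` can be inspected after the run, which is the traceability property the abstract emphasizes.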



Thanks to Reviewer # 1 and # 4 for pointing out that behavioral work in cognitive science suggests that people indeed

Neural Information Processing Systems

Thank you all for your helpful comments on our Comp Neuro paper. If the results of Figure 1 are indicative, this could further improve the results. The supervised training phase is depicted in the somewhat busy Fig. S2. While we disagree with Reviewer #2's opinion that the connection between neural regression and GPs is completely


The normalization of (almost) everything: Our minds can get used to anything, and even crises start feeling normal

Science

For a long time, many climate scientists and advocates held onto an optimistic belief that once the impacts of climate change became undeniable, people and governments would act. But whereas the predictions of climate models have increasingly been borne out, the assumptions about human behavior have not. Even as disasters mount, climate change remains low on voters' priority lists, and policy responses remain tepid. To me, this gap reflects a deeper failure--not just in policy or communication, but in how we understand human adaptability. When I began my career as a computational cognitive scientist, I was drawn to a defining strength of human cognition--a marked ability to adapt.


Advancing Cognitive Science with LLMs

Wulff, Dirk U., Mata, Rui

arXiv.org Artificial Intelligence

Cognitive science faces ongoing challenges in knowledge synthesis and conceptual clarity, in part due to its multifaceted and interdisciplinary nature. Recent advances in artificial intelligence, particularly the development of large language models (LLMs), offer tools that may help to address these issues. This review examines how LLMs can support areas where the field has historically struggled, including establishing cross-disciplinary connections, formalizing theories, developing clear measurement taxonomies, achieving generalizability through integrated modeling frameworks, and capturing contextual and individual variation. We outline the current capabilities and limitations of LLMs in these domains, including potential pitfalls. Taken together, we conclude that LLMs can serve as tools for a more integrative and cumulative cognitive science when used judiciously to complement, rather than replace, human expertise.


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance. Summary: This very strong paper proposes a rational model for algorithm selection based on problem features and Bayesian regression. The model is shown to be effective computationally and to better predict human performance than comparable models. This paper is the epitome of a strong NIPS paper. The paper is clearly written and addresses an interesting problem. There is both a nice computational result about the algorithm and a cognitive model that is tested with a brief experiment.

